The Value of Markov Chain Games with Lack of Information on One Side
Abstract
We consider a two-player zero-sum game given by a Markov chain over a finite set of states K and a family of zero-sum matrix games (G^k)_{k∈K}. The sequence of states follows the Markov chain. At the beginning of each stage, only player 1 is informed of the current state k; the game G^k is then played, the actions played are observed by both players, and the play proceeds to the next stage. We call such a game a Markov chain game with lack of information on one side. This model generalizes the model, introduced by Aumann and Maschler in the sixties, of zero-sum repeated games with lack of information on one side (which corresponds to a constant Markov chain). We generalize the proof of Aumann and Maschler and, via the definition and study of appropriate “non-revealing” auxiliary games with infinitely many stages, show the existence of the uniform value. An important difference with Aumann and Maschler’s model is that here, the notions, for player 1, of using his information and of revealing relevant information are distinct.
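To make the timing of the model concrete, here is a minimal Python sketch of how a play of such a game unfolds; the two-state transition matrix, the payoff matrices and the strategies are illustrative placeholders and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy instance (all numbers are illustrative, not taken from the paper) ---
# Two states K = {0, 1}; the current state follows a Markov chain with
# transition matrix P, started from the initial distribution p0.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p0 = np.array([0.5, 0.5])

# One 2x2 zero-sum matrix game G^k per state (payoffs to player 1).
G = {0: np.array([[1.0, 0.0],
                  [0.0, 0.0]]),
     1: np.array([[0.0, 0.0],
                  [0.0, 1.0]])}

def play(n_stages, strat1, strat2):
    """Simulate n_stages stages: player 1 observes the current state before
    acting, player 2 only observes the past actions of both players."""
    k = int(rng.choice(2, p=p0))
    history = []                      # public history: pairs of past actions
    total = 0.0
    for _ in range(n_stages):
        i = strat1(k, history)        # informed player uses the state and the history
        j = strat2(history)           # uninformed player uses the history only
        total += G[k][i, j]
        history.append((i, j))
        k = int(rng.choice(2, p=P[k]))  # the state moves on along the chain
    return total / n_stages           # average payoff over the n stages

# A "non-revealing" strategy of player 1 ignores the state entirely, so the
# public history carries no information about it.
def non_revealing(k, history):
    return rng.choice(2)

def uniform(history):
    return rng.choice(2)

print(play(1000, non_revealing, uniform))
```

In the sketch, the placeholder non_revealing strategy simply ignores the state; the paper's point, as stated in the abstract, is subtler: since the state keeps moving along the chain, player 1 may use his information at a given stage without thereby revealing anything that remains relevant later.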
Similar articles
The Value of Markov Chain Games with Incomplete Information on Both Sides
We consider zero-sum repeated games with incomplete information on both sides, where the states privately observed by each player follow independent Markov chains. It generalizes the model, introduced by Aumann and Maschler in the sixties and solved by Mertens and Zamir in the seventies, where the private states of the players were fixed. It also includes the model introduced in Renault [20], o...
Markov Games with Frequent Actions and Incomplete Information - The Limit Case
We study a two-player, zero-sum, stochastic game with incomplete information on one side in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the non-informed player only observes his opponent’s actions. We show the existence of a limit value as the time span between two consecutive s...
Markov games with frequent actions and incomplete information
We study a two-player, zero-sum, stochastic game with incomplete information on one side in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the non-informed player only observes his opponent’s actions. We show the existence of a limit value as the time span between two consecutive s...
Financial Risk Modeling with Markov Chain
Investors use different approaches to select an optimal portfolio, so optimal investment choices according to return can be interpreted in different models. The traditional approach to portfolio selection is the mean-variance framework. Another approach is the Markov chain. A Markov chain is a random process without memory, which means that the conditional probability distribution of the nex...
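The snippet above breaks off where it states the Markov property, so here is a short Python illustration of memorylessness; the two-state transition matrix is a made-up example and does not come from that article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-state transition matrix (e.g. two market regimes);
# the numbers are made up for the demonstration.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def simulate(n, k0=0):
    """Sample a path of length n from the chain started in state k0."""
    path = [k0]
    for _ in range(n - 1):
        path.append(int(rng.choice(2, p=P[path[-1]])))
    return np.array(path)

path = simulate(200_000)

# Memorylessness: given the current state, the distribution of the next state
# does not depend on the state two steps back; both estimates are close to P[0, 1].
cur0_prev0 = (path[1:-1] == 0) & (path[:-2] == 0)
cur0_prev1 = (path[1:-1] == 0) & (path[:-2] == 1)
print((path[2:][cur0_prev0] == 1).mean())   # ≈ 0.3
print((path[2:][cur0_prev1] == 1).mean())   # ≈ 0.3
```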
A New Model to Speculate CLV Based on Markov Chain Model
The present study attempts to establish a new framework to speculate customer lifetime value by a stochastic approach. In this research, the customer lifetime value is considered as a combination of the customer's present and future value. In the first step of the proposed model, it is essential to define customer groups based on their behavioral similarities, and in the second step a mechanism to count current ...
Journal: Math. Oper. Res.
Volume: 31, Issue: -
Pages: -
Publication date: 2006